9 research outputs found

    On Penalty Methods for Nonconvex Bilevel Optimization and First-Order Stochastic Approximation

    Full text link
    In this work, we study first-order algorithms for solving Bilevel Optimization (BO) where the objective functions are smooth but possibly nonconvex in both levels and the variables are restricted to closed convex sets. As a first step, we study the landscape of BO through the lens of penalty methods, in which the upper- and lower-level objectives are combined in a weighted sum with penalty parameter σ > 0. In particular, we establish a strong connection between the penalty function and the hyper-objective by explicitly characterizing the conditions under which the values and derivatives of the two must be O(σ)-close. A by-product of our analysis is an explicit formula for the gradient of the hyper-objective when the lower-level problem has multiple solutions under minimal conditions, which could be of independent interest. Next, viewing the penalty formulation as an O(σ)-approximation of the original BO, we propose first-order algorithms that find an ε-stationary solution by optimizing the penalty formulation with σ = O(ε). When the perturbed lower-level problem uniformly satisfies the small-error proximal error-bound (EB) condition, we propose a first-order algorithm that converges to an ε-stationary point of the penalty function, using in total O(ε⁻³) and O(ε⁻⁷) accesses to first-order (stochastic) gradient oracles when the oracles are deterministic and noisy, respectively. Under an additional assumption on the stochastic oracles, we show that the algorithm can be implemented in a fully single-loop manner, i.e., with O(1) samples per iteration, and achieves the improved oracle complexities of O(ε⁻³) and O(ε⁻⁵), respectively.
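    The penalty viewpoint is easy to illustrate on a toy problem. The sketch below is our own construction, not the paper's algorithm: it runs plain gradient descent jointly in (x, y) on one common weighted-sum penalty, σ·f + g (the paper's exact weighting may differ), and recovers the hyper-optimal x up to an O(σ) error.

    ```python
    import numpy as np

    # Toy bilevel problem (illustrative, not from the paper):
    #   upper level: f(x, y) = (x - 1)^2 + y^2
    #   lower level: g(x, y) = (y - x)^2, so y*(x) = x and the
    #   hyper-objective is F(x) = (x - 1)^2 + x^2, minimized at x = 0.5.

    def grad_P(x, y, sigma):
        """Gradient of the penalty function P_sigma = sigma * f + g."""
        dPx = sigma * 2 * (x - 1) - 2 * (y - x)   # d/dx
        dPy = sigma * 2 * y + 2 * (y - x)         # d/dy
        return dPx, dPy

    def penalty_descent(sigma=1e-2, lr=0.1, iters=5000):
        """Joint gradient descent on the penalty function."""
        x, y = 0.0, 0.0
        for _ in range(iters):
            gx, gy = grad_P(x, y, sigma)
            x -= lr * gx
            y -= lr * gy
        return x, y

    x, y = penalty_descent()
    print(f"x = {x:.4f}")   # close to the hyper-optimum 0.5, up to O(sigma)
    ```

    For this quadratic instance the penalty minimizer can be computed in closed form, x = (1 + σ)/(2 + σ), so the gap to the true hyper-optimum 0.5 is indeed O(σ), matching the approximation guarantee described in the abstract.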

    Tractable Optimality in Episodic Latent MABs

    Full text link
    We consider a multi-armed bandit problem with M latent contexts, where an agent interacts with the environment for an episode of H time steps. Depending on the length of the episode, the learner may not be able to accurately estimate the latent context. The resulting partial observation of the environment makes the learning task significantly more challenging. Without any additional structural assumptions, existing techniques for partially observed settings imply that the decision maker can learn a near-optimal policy with O(A)^H episodes, but do not promise more. In this work, we show that learning with a number of samples polynomial in A is possible. We achieve this by using techniques from experiment design. Then, through a method-of-moments approach, we design a procedure that provably learns a near-optimal policy with O(poly(A) + poly(M, H)^{min(M, H)}) interactions. In practice, we show that we can formulate the moment matching via maximum likelihood estimation. In our experiments, this significantly outperforms the worst-case guarantees, as well as existing practical methods. Comment: NeurIPS 202
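    The maximum-likelihood formulation can be sketched on synthetic data. The toy below is ours, not the paper's procedure: episodes are collected with uniform exploration (a crude stand-in for the experiment-design step), and per-context arm means are fit by EM over the episode-level latent context; all names and the setup are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative setup (shapes and values are our own assumptions):
    M, A, H, N = 2, 3, 20, 2000            # contexts, arms, horizon, episodes
    true_p = np.array([[0.9, 0.2, 0.5],    # Bernoulli arm means, context 0
                       [0.1, 0.8, 0.5]])   # Bernoulli arm means, context 1

    # Collect data with uniform exploration over arms.
    ctx = rng.integers(M, size=N)                          # hidden contexts
    arms = rng.integers(A, size=(N, H))                    # pulled arms
    rewards = rng.random((N, H)) < true_p[ctx[:, None], arms]

    def em_fit(arms, rewards, M, A, iters=100):
        """MLE for a mixture of Bernoulli-reward bandits via EM."""
        N, H = arms.shape
        p = rng.uniform(0.3, 0.7, size=(M, A))   # arm-mean estimates
        w = np.full(M, 1.0 / M)                  # mixing weights
        onehot = np.eye(A)[arms]                 # (N, H, A)
        for _ in range(iters):
            # E-step: log-likelihood of each episode under each context
            logp = np.zeros((N, M))
            for m in range(M):
                pm = p[m][arms]                  # means of the pulled arms
                logp[:, m] = np.where(rewards, np.log(pm), np.log1p(-pm)).sum(1)
            logp += np.log(w)
            r = np.exp(logp - logp.max(1, keepdims=True))
            r /= r.sum(1, keepdims=True)         # responsibilities (N, M)
            # M-step: responsibility-weighted empirical arm means
            pulls = np.einsum('nm,nha->ma', r, onehot)
            wins = np.einsum('nm,nha->ma', r, onehot * rewards[:, :, None])
            p = np.clip(wins / np.maximum(pulls, 1e-9), 1e-4, 1 - 1e-4)
            w = r.mean(0)
        return p, w

    p_hat, w_hat = em_fit(arms, rewards, M, A)
    ```

    With H = 20 steps per episode the latent context is well identified, so the recovered arm means match the truth up to a label permutation; the near-optimal policy is then the per-context greedy arm, argmax over each row of `p_hat`.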

    Clustering and Classification [Project Title from Cover]

    No full text
    DTRT13-G-UTC58
    The Expectation-Maximization (EM) algorithm is perhaps the most broadly used algorithm for inference in latent variable problems. A theoretical understanding of its performance, however, largely remains lacking. Recent results established that EM enjoys global convergence for Gaussian Mixture Models. For Mixed Regression, however, only local convergence results have been established, and those only for the high-SNR regime. We show here that EM converges for mixed linear regression with two components (it is known not to converge for three or more), and moreover that this convergence holds under random initialization.
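    The claimed behavior is easy to reproduce on synthetic data. A minimal sketch, under our own toy setup (symmetric two-component mixed linear regression with known noise level, not the authors' analysis): EM alternates posterior responsibilities with weighted least squares, starting from a random initialization.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic two-component mixed linear regression (toy setup, ours):
    # each response is generated by one of two unknown regressors.
    d, n, noise = 5, 4000, 0.1
    b1 = rng.standard_normal(d)
    b2 = -b1                                  # symmetric, well-separated pair
    X = rng.standard_normal((n, d))
    z = rng.integers(2, size=n)               # hidden component labels
    y = np.where(z == 0, X @ b1, X @ b2) + noise * rng.standard_normal(n)

    def em_mlr(X, y, noise, iters=50):
        """EM for 2-component mixed linear regression, random initialization."""
        d = X.shape[1]
        t1, t2 = rng.standard_normal(d), rng.standard_normal(d)
        for _ in range(iters):
            # E-step: posterior weight of component 1 for each point
            s1 = -0.5 * ((y - X @ t1) / noise) ** 2
            s2 = -0.5 * ((y - X @ t2) / noise) ** 2
            w = 1.0 / (1.0 + np.exp(np.clip(s2 - s1, -50.0, 50.0)))
            # M-step: weighted least squares per component (tiny ridge for safety)
            reg = 1e-8 * np.eye(d)
            t1 = np.linalg.solve(X.T @ (X * w[:, None]) + reg,
                                 X.T @ (w * y))
            t2 = np.linalg.solve(X.T @ (X * (1 - w)[:, None]) + reg,
                                 X.T @ ((1 - w) * y))
        return t1, t2

    t1, t2 = em_mlr(X, y, noise)
    ```

    On this instance the iterates escape the degenerate fixed point at zero and recover both regressors (up to swapping the two labels), consistent with the global-convergence-from-random-initialization claim above.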